This analysis includes only the participants with good model fits.

First, comparing the priors

Our first question is what the ESTIMATED prior looked like, and how similar it was in aggregate to the one GIVEN to participants. They look very similar, so yay!

Next, comparing the posteriors

Now we want to look at the aggregate posterior results in the ESTIMATED vs. GIVEN conditions. It looks like participants may be relying on the prior a bit more when it is GIVEN!

Looking at fitted values: \(\beta\), \(\delta\), and \(\gamma\)

Let’s have a look at the distribution of parameters in Experiment 3. As a reminder:

\(\beta\): determines to what degree participants use their stated prior, versus the uniform distribution, to form their posterior. If \(\beta=1\), the posterior is based entirely on the prior; if \(\beta=0\), entirely on the uniform distribution.

\(\gamma\): how much they weighted the red chips they saw. If \(\gamma=1\), the weighting is veridical; lower values indicate underweighting and higher values overweighting.

\(\delta\): how much they weighted the blue chips they saw. If \(\delta=1\), the weighting is veridical; lower values indicate underweighting and higher values overweighting.
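Putting the three parameters together, the model's predicted posterior can be sketched in R. This is only a sketch under assumptions, not the notebook's actual fitting code: the function name `predict_posterior`, the hypothesis grid, and the argument names are all made up here.

```r
# Predicted posterior over a grid of hypotheses (possible red-chip proportions).
# beta mixes the prior with a uniform distribution; gamma and delta weight the
# observed red and blue chip counts in the likelihood.
predict_posterior <- function(prior, n_red, n_blue, beta, gamma, delta) {
  k <- length(prior)
  theta <- seq(0.05, 0.95, length.out = k)  # hypothesis grid (an assumption)
  mixed_prior <- beta * prior + (1 - beta) / k
  likelihood <- theta^(gamma * n_red) * (1 - theta)^(delta * n_blue)
  posterior <- mixed_prior * likelihood
  posterior / sum(posterior)  # normalize to a probability distribution
}
```

With \(\beta=1\) and \(\gamma=\delta=1\) this reduces to Bayes' rule on the stated prior; with \(\beta=0\) the prior is ignored entirely.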

And also a 3D plot

Finally, let’s look at histograms of all of the variables.

Looking at fitted values: \(\beta\) alone

I wanted to see if the \(\beta\) values would be different if we weren’t also fitting \(\delta\) and \(\gamma\). So let’s look at that histogram.

For the priors: 38.5% in ESTIMATED and 18.3% in GIVEN had \(\beta\) less than 0.1, and 22.1% in ESTIMATED and 52.3% in GIVEN had \(\beta\) greater than 0.9.

Looking at fitted values: \(\gamma\) alone

I wanted to see if the \(\gamma\) values would be different if we weren’t also fitting \(\delta\) and \(\beta\). So let’s look at that histogram.

For the likelihoods: 64.2% in GIVEN and 41.3% in ESTIMATED had \(\gamma\) less than 1 (i.e., were conservative).

Looking at fitted values: \(\beta\) and \(\gamma\) alone

Here we fit a single chip-weighting parameter (call it \(\gamma\)) instead of two separate ones (\(\gamma\) and \(\delta\)); i.e., we force the red and blue weights to be the same, so \(\gamma\) can be thought of as a conservatism parameter. I wanted to look at this because I think it may be a lot more interpretable than having both.

\(\beta\): determines to what degree participants use their stated prior, versus the uniform distribution, to form their posterior. If \(\beta=1\), the posterior is based entirely on the prior; if \(\beta=0\), entirely on the uniform distribution.

\(\gamma\): how much they weighted the chips they saw. If \(\gamma=1\), the weighting is veridical; lower values indicate underweighting and higher values overweighting.
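One way this two-parameter fit could be implemented is a bounded optimizer minimizing squared error between the model's predicted posterior and the reported one. Again a sketch under assumptions, not the notebook's actual code: the function name, objective, grid, and bounds are all hypothetical.

```r
# Sketch: fit beta and a shared chip weight gamma for one participant by
# least squares, using optim with box constraints.
fit_two <- function(reported_post, prior, n_red, n_blue) {
  k <- length(prior)
  theta <- seq(0.05, 0.95, length.out = k)  # hypothesis grid (an assumption)
  sse <- function(par) {
    beta <- par[1]; gamma <- par[2]
    pred <- (beta * prior + (1 - beta) / k) *
      theta^(gamma * n_red) * (1 - theta)^(gamma * n_blue)
    pred <- pred / sum(pred)
    sum((pred - reported_post)^2)  # squared error vs. reported posterior
  }
  optim(c(0.5, 1), sse, method = "L-BFGS-B",
        lower = c(0, 0), upper = c(1, 5))$par  # upper bound on gamma assumed
}
```

Constraining \(\beta\in[0,1]\) keeps the prior mixture interpretable; the upper bound on \(\gamma\) just keeps the optimizer in a sane range.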

Let’s calculate the Spearman correlation:

# estimated
cor.test(de3_fittwo$beta[de3_fittwo$condition=="ESTIMATED"],
         de3_fittwo$gamma[de3_fittwo$condition=="ESTIMATED"],
         method="spearman")
## Warning in cor.test.default(de3_fittwo$beta[de3_fittwo$condition ==
## "ESTIMATED"], : Cannot compute exact p-value with ties
## 
##  Spearman's rank correlation rho
## 
## data:  de3_fittwo$beta[de3_fittwo$condition == "ESTIMATED"] and de3_fittwo$gamma[de3_fittwo$condition == "ESTIMATED"]
## S = 177373, p-value = 0.5875
## alternative hypothesis: true rho is not equal to 0
## sample estimates:
##        rho 
## 0.05380852
# given
cor.test(de3_fittwo$beta[de3_fittwo$condition=="GIVEN"],
         de3_fittwo$gamma[de3_fittwo$condition=="GIVEN"],
         method="spearman")
## Warning in cor.test.default(de3_fittwo$beta[de3_fittwo$condition == "GIVEN"], :
## Cannot compute exact p-value with ties
## 
##  Spearman's rank correlation rho
## 
## data:  de3_fittwo$beta[de3_fittwo$condition == "GIVEN"] and de3_fittwo$gamma[de3_fittwo$condition == "GIVEN"]
## S = 229190, p-value = 0.5222
## alternative hypothesis: true rho is not equal to 0
## sample estimates:
##         rho 
## -0.06194754

And here is a histogram of them, too.

For the priors: 34.6% in ESTIMATED and 22% in GIVEN had \(\beta\) less than 0.1, and 26.9% in ESTIMATED and 39.4% in GIVEN had \(\beta\) greater than 0.9.

For the likelihoods: 68.8% in GIVEN and 48.1% in ESTIMATED had \(\gamma\) less than 1 (i.e., were conservative).
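The percentages above can be computed directly from the fitted data frame. A sketch, assuming `de3_fittwo` has `condition`, `beta`, and `gamma` columns as in the correlation code above:

```r
# Proportion of participants per condition with extreme beta values and with
# conservative gamma values (gamma < 1).
summarize_fits <- function(d) {
  sapply(split(d, d$condition), function(g) c(
    beta_below_0.1 = mean(g$beta < 0.1),
    beta_above_0.9 = mean(g$beta > 0.9),
    gamma_below_1  = mean(g$gamma < 1)
  ))
}
```

For example, `round(100 * summarize_fits(de3_fittwo), 1)` would print the percentages by condition.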

Individuals: ESTIMATED condition

So now let’s look at individuals: comparing each participant’s prior and their posterior after five chips, based on the best-fit \(\beta\) and \(\gamma\).

red line: their reported prior

dark purple line: their reported posterior

solid grey: Bayes’ rule prediction assuming their prior

dotted black: line based on best-fit \(\beta\) and \(\gamma\)
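A single participant’s panel might be drawn roughly like this (a base-R sketch; the real figures use whatever plotting setup the notebook defines, and all argument names here are hypothetical):

```r
# One participant: prior (red), reported posterior (dark purple), Bayes-rule
# prediction from the prior (solid grey), and best-fit model line (dotted black).
plot_participant <- function(theta, prior, reported, bayes_pred, model_pred) {
  plot(theta, prior, type = "l", col = "red",
       ylim = c(0, max(prior, reported, bayes_pred, model_pred)),
       xlab = "hypothesis", ylab = "probability")
  lines(theta, bayes_pred, col = "grey", lwd = 2)
  lines(theta, reported, col = "purple4")
  lines(theta, model_pred, col = "black", lty = 3)
}
```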

Individuals: GIVEN condition

We’re going to do the same thing as before, but this time with people in the GIVEN condition.

red line: the given prior

dark blue line: their reported posterior after unlimited

solid grey: Bayes’ rule prediction assuming the given prior

dotted black: line based on best-fit \(\beta\) and \(\gamma\)

We can also get a sense of how good the fits were.


Aggregate fits

We can also look at the parameter values for the aggregate fits. First, GIVEN.

Then, ESTIMATED.